Warped Functional Analysis of Variance
This article presents an Analysis of Variance model for functional data that
explicitly incorporates phase variability through a time-warping component,
allowing for a unified approach to estimation and inference in the presence of
amplitude and time variability. The focus is on single-random-factor models, but
the approach can be easily generalized to more complex ANOVA models. The
behavior of the estimators is studied by simulation, and an application to the
analysis of growth curves of flour beetles is presented. Although the model
assumes a smooth latent process behind the observed trajectories, smoothness of
the observed data is not required; the method can be applied to the sparsely
observed data that are often encountered in longitudinal studies.
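The phase-variability problem this model addresses is easy to demonstrate: when a common smooth latent curve is observed under random time warps, the naive cross-sectional mean flattens the amplitude that a warp-aware analysis would recover. A minimal sketch (the shift-only warp and all constants are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 201)

def f(s):
    """Latent smooth curve: a Gaussian bump centred at 0.5."""
    return np.exp(-((s - 0.5) ** 2) / 0.005)

# Each observed curve is the latent curve under a random time warp;
# here the warps are plain random shifts h_i(t) = t - s_i for simplicity.
shifts = rng.normal(0.0, 0.08, size=30)
curves = np.array([f(t - s) for s in shifts])

peak_of_mean = curves.mean(axis=0).max()   # cross-sectional mean: flattened
mean_of_peaks = curves.max(axis=1).mean()  # typical individual amplitude
```

The cross-sectional mean's peak is markedly lower than the average peak of the individual curves, which is exactly the amplitude distortion that separating phase (warping) from amplitude variability avoids.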
DISCO analysis: A nonparametric extension of analysis of variance
In classical analysis of variance, dispersion is measured by considering
squared distances of sample elements from the sample mean. We consider a
measure of dispersion for univariate or multivariate response based on all
pairwise distances between sample elements, and derive an analogous distance
components (DISCO) decomposition for powers of distance. The ANOVA F
statistic is obtained when the index (exponent) is 2. For each index in the
admissible range, this decomposition determines a nonparametric test for the
multi-sample hypothesis of equal distributions that is statistically consistent
against general alternatives.
Comment: Published at http://dx.doi.org/10.1214/09-AOAS245 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
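A minimal sketch of the between-sample dispersion component underlying the DISCO decomposition, written in energy-distance form for two one-dimensional samples (the function names and the restriction to 1-D are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mean_pairwise(x, y, alpha=1.0):
    """Mean of |x_i - y_j|**alpha over all pairs of two 1-D samples."""
    return (np.abs(x[:, None] - y[None, :]) ** alpha).mean()

def energy_between(x, y, alpha=1.0):
    """Between-sample dispersion in energy-distance form:
        2*E|X-Y|^a - E|X-X'|^a - E|Y-Y'|^a.
    It is zero when the two samples coincide, and for alpha == 2 it
    reduces to twice the squared difference of the sample means, i.e.
    the classical ANOVA between-group comparison."""
    return (2 * mean_pairwise(x, y, alpha)
            - mean_pairwise(x, x, alpha)
            - mean_pairwise(y, y, alpha))
```

With alpha = 2 this statistic only sees differences in means (the ANOVA case); with a smaller exponent it responds to any difference between the two distributions, which is the basis of the consistency claim in the abstract.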
Analysis of variance for Bayesian inference
This paper develops a multi-way analysis of variance for non-Gaussian multivariate distributions and provides a practical simulation algorithm to estimate the corresponding components of variance. It specifically addresses variance in Bayesian predictive distributions, showing that it may be decomposed into the sum of extrinsic variance, arising from posterior uncertainty about parameters, and intrinsic variance, which would exist even if parameters were known. Depending on the application at hand, further decomposition of extrinsic or intrinsic variance (or both) may be useful. The paper shows how to produce simulation-consistent estimates of all of these components, and the method demands little additional effort or computing time beyond that already invested in the posterior simulator. It illustrates the methods using a dynamic stochastic general equilibrium model of the US economy, both before and during the global financial crisis.
JEL Classification: C11, C53
Keywords: Analysis of variance, Bayesian inference, posterior simulation, predictive distributions
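The extrinsic/intrinsic split is an instance of the law of total variance, Var(y) = Var(E[y|θ]) + E[Var(y|θ)], and both components can be estimated directly from nested posterior and predictive draws. A minimal sketch on a synthetic Gaussian example (the model and all constants are illustrative assumptions, not the paper's DSGE application):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 2000, 200                      # posterior draws x predictive draws each

# Illustrative model: theta ~ posterior N(0, 0.5^2), y | theta ~ N(theta, 1)
theta = rng.normal(0.0, 0.5, size=M)
y = theta[:, None] + rng.normal(size=(M, K))

intrinsic = np.var(y, axis=1).mean()  # E[Var(y | theta)]: remains if theta known
extrinsic = np.var(y.mean(axis=1))    # Var(E[y | theta]): posterior uncertainty
total = np.var(y)                     # sample identity: intrinsic + extrinsic
```

With ddof=0 variances and equal group sizes the sample decomposition is exact, so the two components are simulation-consistent estimates that reuse the posterior draws already available.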
MEAN-VARIANCE ANALYSIS OF ALTERNATIVE HEDGING STRATEGIES
Demand and Price Analysis
Variance Analysis of Randomized Consensus in Switching Directed Networks
In this paper, we study the asymptotic properties of distributed consensus
algorithms over switching directed random networks. More specifically, we focus
on consensus algorithms over independent and identically distributed, directed
Erdos-Renyi random graphs, where each agent can communicate with any other
agent with some exogenously specified probability. While it is well known
that consensus algorithms over Erdos-Renyi random networks result in an
asymptotic agreement over the network, an analytical characterization of the
distribution of the asymptotic consensus value is still an open question. In
this paper, we provide closed-form expressions for the mean and variance of the
asymptotic random consensus value, in terms of the size of the network and the
probability of communication. We also provide numerical simulations that
illustrate our results.
Comment: 6 pages, 3 figures, submitted to American Control Conference 201
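While the paper derives closed-form expressions, the mean of the asymptotic consensus value is easy to check empirically: over i.i.d. directed Erdos-Renyi graphs with equal-neighbor averaging weights (an illustrative weight choice, not necessarily the paper's), the agents are exchangeable, so the expected consensus value equals the average of the initial states.

```python
import numpy as np

def consensus(x0, p, steps, rng):
    """Iterate x(k+1) = W(k) x(k), where W(k) is the equal-neighbor
    averaging matrix of a fresh i.i.d. directed Erdos-Renyi graph."""
    x = x0.copy()
    n = len(x)
    for _ in range(steps):
        A = rng.random((n, n)) < p
        np.fill_diagonal(A, True)             # self-loops: row sums stay > 0
        W = A / A.sum(axis=1, keepdims=True)  # row-stochastic weights
        x = W @ x
    return x

rng = np.random.default_rng(0)
n, p = 10, 0.3
x0 = np.arange(n, dtype=float) / n            # initial states, mean 0.45
vals = np.array([consensus(x0, p, 150, rng).mean() for _ in range(300)])
```

The sample mean of `vals` is close to `x0.mean()`, while the spread across trials illustrates why the consensus value itself is random, which is exactly the variance the paper characterizes.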
Dimensionality reduction of optimization problems using variance based sensitivity analysis
We propose a new interaction index derived from the computation of Sobol indices. In optimization, the interaction index can be used to detect a lack of interaction among input parameters. If the first-order interaction indices are zero, the corresponding parameters can be optimized independently while holding the other parameters constant. Likewise, second-order interaction indices indicate whether a combination of two parameters can be optimized independently of the other parameters. In this way, the original optimization problem may be decomposed into a set of lower-dimensional problems which may then be solved independently and in parallel. The interaction indices can potentially be useful in robust optimization as well, since they provide an importance measure for minimizing output variances.
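A minimal sketch of estimating first-order Sobol indices by Monte Carlo, using the standard Saltelli pick-and-freeze scheme (assumed here as a reference estimator, not taken from the paper). For an additive test function there are no interactions, so the first-order indices account for all of the output variance:

```python
import numpy as np

def sobol_first_order(f, d, n=20000, seed=0):
    """Saltelli pick-and-freeze Monte Carlo estimate of the first-order
    Sobol indices S_i of f over d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]          # freeze coordinate i at B's values
        S[i] = np.mean(fB * (f(AB) - fA)) / total_var
    return S

# Additive test function: no interactions, so the indices sum to ~1
S = sobol_first_order(lambda X: X[:, 0] + 2 * X[:, 1], d=2)  # S ~ [0.2, 0.8]
```

When the first-order indices sum to (approximately) one, as here, the variance decomposition signals that each coordinate can be optimized separately, which is the decomposition-into-lower-dimensional-problems idea in the abstract.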